Some Intra-Frame and Inter-Frame Processing Schemes for Efficient Video Compression
The rapid increase in digital applications driven by recent advances in digital communication and devices demands significant storage, processing, and transmission of video information. However, the amount of raw captured video data is huge, which complicates every kind of video processing. At the same time, applications demand fast transmission to electronic devices of different sizes with good quality, and limited bandwidth and storage memory make this challenging. These practical constraints on processing huge amounts of video data make video compression an active and challenging field of research. The aim of video compression is to remove redundancy from raw video while maintaining quality and fidelity. For inter-frame processing, motion estimation is used to reduce temporal redundancy in almost all video coding standards, e.g. MPEG-2, MPEG-4, and H.264/AVC, which use state-of-the-art algorithms to provide higher compression at good perceptual quality. Although motion estimation is the main contributor to higher compression, it is also the most computationally complex part of the video coding toolchain. There is therefore a standing requirement for an algorithm that is both fast and accurate, providing higher compression with good-quality output. The goal of this project is to propose a motion estimation algorithm that meets these requirements and overcomes these practical limitations. In this thesis we analyze the motion of video sequences, and some novel block-matching-based motion estimation algorithms are proposed to improve video coding efficiency in inter-frame processing. Particle Swarm Optimization and a Differential Evolution model are used for fast and accurate motion estimation and compensation. Spatial and temporal correlation is exploited for the initial population, and strategies are followed for adaptive generations, particle population, and the preservation and exploitation of particle location history.
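The PSO-driven block-matching idea can be illustrated with a minimal sketch. This is not the thesis algorithm: the SAD cost, swarm parameters, and the seeding of one particle at the zero-motion vector (standing in for the spatial/temporal prior mentioned above) are all illustrative assumptions.

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between `block` and the reference patch at (y, x)."""
    h, w = block.shape
    if y < 0 or x < 0:
        return np.inf
    patch = ref[y:y + h, x:x + w]
    if patch.shape != block.shape:  # candidate falls outside the frame
        return np.inf
    return float(np.abs(block.astype(np.int64) - patch.astype(np.int64)).sum())

def pso_block_match(block, ref, y0, x0, search=7, n_particles=8, iters=5, seed=0):
    """Estimate a motion vector for `block` (anchored at (y0, x0) in the
    reference frame `ref`) by minimising SAD with a small PSO swarm.
    All hyper-parameters here are illustrative, not the tuned thesis values."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(-search, search + 1, size=(n_particles, 2)).astype(float)
    pos[0] = 0.0  # seed one particle at zero motion (stand-in for the spatial prior)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([sad(block, ref, y0 + int(p[0]), x0 + int(p[1]))
                           for p in pos])
    g = int(np.argmin(pbest_cost))
    gbest, gbest_cost = pbest[g].copy(), pbest_cost[g]
    w_in, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -search, search)
        for i, p in enumerate(pos):
            c = sad(block, ref, y0 + int(round(p[0])), x0 + int(round(p[1])))
            if c < pbest_cost[i]:
                pbest_cost[i], pbest[i] = c, p.copy()
                if c < gbest_cost:
                    gbest_cost, gbest = c, p.copy()
    return int(round(gbest[0])), int(round(gbest[1])), gbest_cost
```

Because the swarm only samples a handful of candidates per iteration, the number of SAD evaluations stays far below an exhaustive full search over the (2·search+1)² window, which is the complexity reduction the thesis targets.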
The experimental results show that the proposed algorithm maintains accuracy while significantly reducing the number of search points, and hence computational complexity, while achieving comparable video coding performance. Spatial-domain redundancy is reduced by skipping irrelevant or spatially correlated data with different sub-sampling algorithms. The sub-sampled intra-frame is up-sampled at the receiver side, and the up-sampled high-resolution frame needs to be of good quality. Existing up-sampling or interpolation techniques produce undesirable blurring and ringing artifacts. To alleviate this problem, a novel spatio-temporal pre-processing approach is proposed to improve the quality. The proposed method uses the low-frequency DCT (Discrete Cosine Transform) components to sub-sample the frame at the transmitter side. At the receiver side, a pre-processing method is proposed in which the received sub-sampled frame is passed through a Wiener filter that uses local statistics in a 3×3 neighborhood to modify pixel values. The output of the Wiener filter is added to an optimized multiple of the high-frequency component, and the result is then passed through a DCT block for up-sampling. Results show that the proposed method outperforms popularly used interpolation techniques in terms of quality measures.
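The DCT-domain pipeline described above can be sketched as follows, under stated assumptions: SciPy's local-statistics Wiener filter (`scipy.signal.wiener`, 3×3 window) stands in for the thesis filter, `alpha` is an illustrative weight rather than the optimized multiple from the thesis, and frame dimensions are assumed divisible by the sub-sampling factor.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.signal import wiener

def dct_downsample(frame, factor=2):
    """Transmitter side: zonal DCT sub-sampling, keeping only the
    low-frequency block of the spectrum (frame dims divisible by factor)."""
    H, W = frame.shape
    h, w = H // factor, W // factor
    C = dctn(frame, norm='ortho')
    return idctn(C[:h, :w], norm='ortho') * np.sqrt((h * w) / (H * W))

def receiver_preprocess(sub, alpha=0.1):
    """Receiver side: Wiener-filter the received sub-sampled frame using
    3x3 local statistics, then re-inject a weighted high-frequency residual.
    `alpha` is an illustrative weight, not the optimized thesis value."""
    filtered = wiener(sub, mysize=3)
    high = sub - filtered  # high-frequency component removed by the filter
    return filtered + alpha * high

def dct_upsample(small, factor=2):
    """Receiver side: zero-pad the DCT spectrum to interpolate back
    to full resolution."""
    h, w = small.shape
    H, W = h * factor, w * factor
    C = np.zeros((H, W))
    C[:h, :w] = dctn(small, norm='ortho')
    return idctn(C, norm='ortho') * np.sqrt((H * W) / (h * w))
```

The √(hw/HW) scaling keeps pixel amplitudes consistent across resolutions with the orthonormal DCT, so a constant frame survives the down/up round trip exactly.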
Transmission Map and Atmospheric Light Guided Iterative Updater Network for Single Image Dehazing
Hazy images obscure content visibility and hinder several subsequent computer
vision tasks. For dehazing in a wide variety of hazy conditions, an end-to-end
deep network jointly estimating the dehazed image along with suitable
transmission map and atmospheric light for guidance could prove effective. To
this end, we propose an Iterative Prior Updated Dehazing Network (IPUDN) based
on a novel iterative update framework. We present a novel convolutional
architecture to estimate channel-wise atmospheric light, which along with an
estimated transmission map are used as priors for the dehazing network. Use of
channel-wise atmospheric light allows our network to handle color casts in hazy
images. In our IPUDN, the transmission map and atmospheric light estimates are
updated iteratively using corresponding novel updater networks. The iterative
mechanism is leveraged to gradually modify the estimates toward those
appropriately representing the hazy condition. These updates occur jointly with
the iterative estimation of the dehazed image using a convolutional neural
network with LSTM driven recurrence, which introduces inter-iteration
dependencies. Our approach is qualitatively and quantitatively found effective
for synthetic and real-world hazy images depicting varied hazy conditions, and
it outperforms the state-of-the-art. Thorough analyses of IPUDN through
additional experiments and detailed ablation studies are also presented.

Comment: First two authors contributed equally. This work has been submitted
to the IEEE for possible publication. Copyright may be transferred without
notice, after which this version may no longer be accessible. Project
Website: https://aupendu.github.io/iterative-dehaz
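The transmission map and atmospheric light estimated by IPUDN serve as priors tied to the standard atmospheric scattering model I = J·t + A·(1 − t). The sketch below inverts that classical model with channel-wise atmospheric light, as the abstract describes; it illustrates the model the priors guide, not the IPUDN network itself, and the `t_min` floor is a common stabilising assumption.

```python
import numpy as np

def dehaze_scattering_model(hazy, transmission, atmos, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t):
        J = (I - A) / t + A
    hazy:         HxWx3 image in [0, 1]
    transmission: HxW transmission map t in (0, 1]
    atmos:        per-channel atmospheric light, shape (3,) - channel-wise,
                  which is what lets color casts be handled.
    `t_min` clamps the transmission to avoid amplifying noise where t -> 0."""
    t = np.clip(transmission, t_min, 1.0)[..., None]
    dehazed = (hazy - atmos) / t + atmos
    return np.clip(dehazed, 0.0, 1.0)
```

Composing a synthetic hazy image with known J, t, and A and then inverting it recovers the haze-free image exactly wherever t stays above the floor, which is why accurate transmission and atmospheric-light estimates are such effective guidance.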